10 research outputs found

    GPU Accelerated Multi-agent Path Planning Based on Grid Space Decomposition

    In this work, we describe a simple and powerful method to implement real-time multi-agent path-finding on Graphics Processing Units (GPUs). The technique aims to find potential paths for many thousands of agents, using the A* algorithm and an input grid map partitioned into blocks. We propose a GPU implementation that uses a search-space decomposition approach to break down the forward-search A* algorithm into parallel, independent forward sub-searches. We show that this approach fits well with the programming model of GPUs, enabling planning for many thousands of agents in parallel in real-time applications such as computer games and robotics. The paper describes this implementation using the Compute Unified Device Architecture programming environment, and demonstrates its performance advantages over a GPU implementation of Real-Time Adaptive A*
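The paper's CUDA kernels are not reproduced in the abstract. As an illustration of the property the decomposition relies on, namely that each agent's forward search is independent and therefore trivially parallelizable, here is a minimal CPU sketch in Python (all function names are ours, not the authors'):

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid; grid[y][x] == 0 means free, 1 means blocked."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_heap = [(h(start), 0, start)]
    g = {start: 0}          # best known cost-so-far per cell
    parent = {}
    while open_heap:
        _, cost, node = heapq.heappop(open_heap)
        if node == goal:
            path = [node]
            while node in parent:
                node = parent[node]
                path.append(node)
            return path[::-1]
        if cost > g.get(node, float("inf")):
            continue        # stale heap entry
        x, y = node
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and grid[ny][nx] == 0:
                nc = cost + 1
                if nc < g.get((nx, ny), float("inf")):
                    g[(nx, ny)] = nc
                    parent[(nx, ny)] = node
                    heapq.heappush(open_heap, (nc + h((nx, ny)), nc, (nx, ny)))
    return None  # no path

def plan_all(grid, agents):
    """Each (start, goal) search is independent; on a GPU each would map to
    its own thread/block, here we simply loop sequentially."""
    return [astar(grid, s, t) for s, t in agents]
```

Because the searches share no mutable state, mapping one search per GPU thread requires no synchronization, which is what makes the approach fit the GPU programming model.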

    On the effect of exploiting gpus for a more Eco-sustainable lease of life

    It has been estimated that about 2% of global carbon dioxide emissions can be attributed to IT systems. Green (or sustainable) computing refers to supporting business-critical computing needs with the least possible amount of power. This phenomenon changes the priorities in the design of new software systems and in the way companies handle existing ones. In this paper, we present the results of a research project aimed at developing a migration strategy to give an existing software system a new and more eco-sustainable lease of life. We applied a strategy for migrating a subject system that performs intensive and massive computation to a target architecture based on a Graphics Processing Unit (GPU). We validated our solution on a system for path-finding robot simulations. An analysis of execution time and energy consumption indicated that: (i) the execution time of the migrated system is lower than that of the original system; and (ii) the migrated system reduces energy waste, suggesting that it is more eco-sustainable than its original version. Our findings add to the body of knowledge on the effect of using the GPU in green computing. © 2015 World Scientific Publishing Company

    Using the GPU to Green an Intensive and Massive Computation System

    In this paper, we present the early results of an ongoing project aimed at giving an existing software system a more eco-sustainable lease of life. We defined a strategy and a process for migrating a subject system that performs intensive and massive computation to a Graphics Processing Unit (GPU)-based architecture. We validated our solutions on a software system for path-finding robot simulations. An initial comparison of the energy consumption of the original system and the greened one was also performed. The results obtained suggest that the application of our solution produced more eco-sustainable software

    Approximate TF-IDF based on topic extraction from massive message stream using the GPU

    The Web is a constantly expanding global information space that includes disparate types of data and resources. Recent trends demonstrate the urgent need to manage large amounts of streaming data, especially in specific domains of application such as critical infrastructure systems, sensor networks, log file analysis, search engines and, more recently, social networks. All of these applications involve large-scale data-intensive tasks, often subject to time and space constraints. Algorithms, data management and data retrieval techniques must be able to process data streams, i.e., process data as it becomes available and provide an accurate response based solely on the portion of the stream already received. Data retrieval techniques often require a traditional data storage and processing approach, i.e., all data must be available in the storage space in order to be processed. For instance, a widely used relevance measure is Term Frequency–Inverse Document Frequency (TF–IDF), which can evaluate how important a word is in a collection of documents, but requires a priori knowledge of the whole dataset. To address this problem, we propose an approximate version of the TF–IDF measure suitable for continuous data streams (such as exchanges of messages, tweets and sensor-based log files). The algorithm for the calculation of this measure makes two assumptions: a fast response is required, and memory is both limited and infinitely smaller than the size of the data stream. In addition, to meet the great computational power required to process massive data streams, we also present a parallel implementation of the approximate TF–IDF calculation using Graphics Processing Units (GPUs). This implementation of the algorithm was tested on generated and real data streams and was able to capture the most frequent terms. Our results demonstrate that the approximate TF–IDF measure performs at a level comparable to that of the exact TF–IDF measure
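The abstract does not specify which bounded-memory summary the authors use. One standard technique with exactly the advertised property, capturing the most frequent terms of a stream in memory far smaller than the stream itself, is the Misra–Gries summary, sketched below as an illustrative stand-in (not the paper's algorithm):

```python
def misra_gries(stream, k):
    """Track at most k-1 candidate heavy hitters in O(k) memory.

    Guarantee: any term occurring more than n/k times in a stream of
    length n is certain to appear among the returned candidates."""
    counters = {}
    for term in stream:
        if term in counters:
            counters[term] += 1          # already a candidate
        elif len(counters) < k - 1:
            counters[term] = 1           # spare counter available
        else:
            # No room: decrement every counter, evicting those that hit zero.
            for t in list(counters):
                counters[t] -= 1
                if counters[t] == 0:
                    del counters[t]
    return counters
```

An approximate TF for a term is then its surviving counter value; pairing it with a running document count gives a streaming estimate of IDF without storing the whole dataset. The per-term counter updates are also independent, which is what makes a GPU-parallel variant natural.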

    Exploiting GPUs for multi-agent path planning on grid maps

    Multi-agent path planning on grid maps is a challenging problem with numerous real-life applications, ranging from robotics to real-time strategy games and non-player characters in video games. A* is a cost-optimal forward-search algorithm for path planning which scales poorly in practice, since both the search space and the branching factor grow exponentially in the number of agents. In this work, we propose an A* implementation for Graphics Processing Units (GPUs) which uses a grid map as its search space. The approach uses a search-space decomposition to break down the forward-search A* algorithm into parallel, independent forward sub-searches. The solution offers no guarantees with respect to completeness and solution quality, but exploits the computational capability of GPUs to accelerate path planning for many thousands of agents. The paper describes this implementation using the Compute Unified Device Architecture (CUDA) programming environment, and demonstrates its performance advantages over a GPU implementation of Real-Time Adaptive A*
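The abstract explains why guarantees are lost: sub-searches are planned independently over a decomposed map. A common way to realize such a decomposition (our illustrative reconstruction, not necessarily the authors' exact scheme) is to partition the grid into blocks and first plan a coarse route over the block graph, which each agent can then refine within its blocks in parallel:

```python
from collections import deque

def block_of(cell, bs):
    """Map a grid cell (x, y) to its containing block (bx, by)."""
    return (cell[0] // bs, cell[1] // bs)

def coarse_route(grid, start, goal, bs=2):
    """BFS over the coarse graph of bs-by-bs map blocks.

    Crude adjacency: two neighboring blocks are connected if the target
    block contains any free cell. This coarsening is exactly why such a
    scheme loses completeness/optimality guarantees: the coarse route can
    miss, or misjudge, the true shortest path."""
    h, w = len(grid), len(grid[0])

    def block_free(b):
        bx, by = b
        return any(grid[y][x] == 0
                   for y in range(by * bs, min((by + 1) * bs, h))
                   for x in range(bx * bs, min((bx + 1) * bs, w)))

    sb, gb = block_of(start, bs), block_of(goal, bs)
    q, parent = deque([sb]), {sb: None}
    while q:
        b = q.popleft()
        if b == gb:
            route = []
            while b is not None:
                route.append(b)
                b = parent[b]
            return route[::-1]
        bx, by = b
        for nb in ((bx + 1, by), (bx - 1, by), (bx, by + 1), (bx, by - 1)):
            if (0 <= nb[0] < (w + bs - 1) // bs
                    and 0 <= nb[1] < (h + bs - 1) // bs
                    and nb not in parent and block_free(nb)):
                parent[nb] = b
                q.append(nb)
    return None  # no block-level route
```

Refining each leg of the route is then an independent small search per agent per block, which is the granularity a GPU kernel can exploit.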

    Virtual Reality Rehabilitation Systems for Cancer Survivors: A Narrative Review of the Literature

    Rehabilitation plays a crucial role in cancer care, as the functioning of cancer survivors is frequently compromised by impairments that can result from the disease itself but also from the long-term sequelae of the treatment. Nevertheless, the current literature shows that only a minority of patients receive physical and/or cognitive rehabilitation. This lack of rehabilitative care is a consequence of many factors, one of which is the transportation issues linked to disability that limit the patient’s access to rehabilitation facilities. The recent COVID-19 pandemic has further shown the benefits of improving telemedicine and home-based rehabilitative interventions to facilitate the delivery of rehabilitation programs when attendance at healthcare facilities is an obstacle. In recent years, researchers have been investigating the benefits of applying virtual reality to rehabilitation. Virtual reality has been shown to improve adherence and training intensity through gamification, to allow the replication of real-life scenarios, and to stimulate patients in a multimodal manner. In the present work, we offer an overview of the current literature on virtual reality-implemented cancer rehabilitation. The wide margin for technological development allows us to expect further improvements, but more randomized controlled trials are needed to confirm the hypothesis that virtual reality rehabilitation (VRR) may improve adherence rates and facilitate telerehabilitation

    Serious games and in-cloud data analytics for the virtualization and personalization of rehabilitation treatments

    In recent years, the significant increase in the number of patients in need of rehabilitation has generated an unsustainable economic impact on healthcare systems, implying a reduction in therapeutic supervision and support for each patient. To address this problem, this paper proposes a tele-rehabilitation system based on serious games and in-cloud data analytics services, in accordance with Industry 4.0 design principles regarding modularity, service orientation, decentralization, virtualization and real-time capability. The system, specialized for post-stroke patients, comprises components for real-time acquisition of the patient's motor data and a decision support service for their analysis. Raw data, reports, and recommendations are made available on the cloud to clinical operators, who can remotely assess rehabilitation outcomes and dynamically improve therapies. Furthermore, the results of a pilot study on the clinical impact of adopting the proposed solution, and of a qualitative analysis of its acceptance, are presented and discussed
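The abstract does not detail the decision-support logic or the sensor schema. Purely as an illustrative sketch of the acquisition-then-recommendation flow described above, with every field name and threshold hypothetical:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class MotorSample:
    """One real-time motor-data reading (hypothetical schema)."""
    session_id: str
    joint_angle_deg: float    # e.g. elbow extension captured during the game
    completion_time_s: float  # time to complete the exercise repetition

def recommend(samples, target_angle=150.0):
    """Toy decision-support rule (not the paper's): compare the patient's
    mean range of motion in a session against a therapist-set target and
    suggest how to adapt the serious game's difficulty."""
    avg = mean(s.joint_angle_deg for s in samples)
    if avg < 0.8 * target_angle:
        return "reduce difficulty"
    if avg >= target_angle:
        return "increase difficulty"
    return "keep current level"
```

In the architecture described, such a rule would run as an in-cloud service over the uploaded raw data, with its output surfaced to clinical operators alongside the reports.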